Decades of progress in simulation-based surrogate-assisted optimization and unprecedented growth in computational power have enabled researchers and practitioners to optimize previously intractable complex engineering problems. This paper investigates the possible benefit of concurrently using multiple simulation-based surrogate models to solve complex discrete optimization problems. To this end, the so-called Self-Adaptive Multi-surrogate Assisted Efficient Global Optimization algorithm (SAMA-DiEGO), which features a two-stage online model management strategy, is proposed and benchmarked on fifteen binary-encoded combinatorial and fifteen ordinal problems against several state-of-the-art non-surrogate or single-surrogate-assisted optimization algorithms. Our findings indicate that SAMA-DiEGO can rapidly converge to better solutions on a majority of the test problems, which shows the feasibility and advantage of using multiple surrogate models in optimizing discrete problems.
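The core idea of managing several surrogates at once can be illustrated with a minimal sketch (not the authors' SAMA-DiEGO implementation; the toy objective, model pool, and budgets below are made up): a pool of regressors is fitted to the evaluated bit strings, the one with the lowest cross-validation error is selected online (stage one), and that model pre-screens mutated candidates so the single expensive evaluation per iteration is spent on the most promising one (stage two).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_bits = 30

def objective(x):
    # Toy pseudo-Boolean objective standing in for an expensive simulation.
    return -(x.sum() + 0.5 * x[0] * x[-1])

# Initial design: random bit strings with true (expensive) evaluations.
X = rng.integers(0, 2, size=(20, n_bits))
y = np.array([objective(x) for x in X])

surrogates = [RandomForestRegressor(n_estimators=50, random_state=0),
              GradientBoostingRegressor(random_state=0),
              BayesianRidge()]

for it in range(20):
    # Stage 1: pick the surrogate with the lowest cross-validated error.
    errors = [-cross_val_score(m, X, y, cv=3,
                               scoring="neg_mean_squared_error").mean()
              for m in surrogates]
    best_model = surrogates[int(np.argmin(errors))]
    best_model.fit(X, y)

    # Stage 2: pre-screen bit-flip mutations of the incumbent on the surrogate
    # and spend the single true evaluation on the most promising candidate.
    incumbent = X[np.argmin(y)]
    candidates = np.tile(incumbent, (100, 1))
    flips = rng.integers(0, n_bits, size=100)
    candidates[np.arange(100), flips] ^= 1
    pick = candidates[np.argmin(best_model.predict(candidates))]

    X = np.vstack([X, pick])
    y = np.append(y, objective(pick))

print("best value found:", y.min())
```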
Benchmarking is a key aspect of research into optimization algorithms, and as such the way in which the most popular benchmark suites are designed implicitly guides some parts of algorithm design. One of these suites is the black-box optimization benchmarking (BBOB) suite of 24 single-objective noiseless functions, which has been a standard for over a decade. Within this problem suite, different instances of a single problem can be created, which is beneficial for testing the stability and invariance of algorithms under transformations. In this paper, we investigate the BBOB instance creation protocol by considering a set of 500 instances for each BBOB problem. Using exploratory landscape analysis, we show that the distribution of landscape features across BBOB instances is highly diverse for a large set of problems. In addition, we run a set of eight algorithms across these 500 instances, and investigate for which cases statistically significant differences in performance occur. We argue that, while the transformations applied in BBOB instances do indeed seem to preserve the high-level properties of the functions, their difference in practice should not be overlooked, particularly when treating the problems as box-constrained instead of unconstrained.
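The kind of instance-to-instance variation studied here can be illustrated with a toy sketch (this is not the BBOB/COCO instance generator; the base function, transformations, and statistics below are simplifications): each "instance" applies a random shift and rotation to a base function, and simple sample-based statistics, a crude stand-in for full exploratory landscape analysis feature sets, can already differ between instances.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(42)
dim = 5

def rastrigin(z):
    return 10 * dim + np.sum(z**2 - 10 * np.cos(2 * np.pi * z), axis=-1)

def make_instance(seed):
    # Toy "instance": a random orthogonal rotation and a shift of the optimum,
    # loosely mimicking the spirit of BBOB instance transformations.
    r = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(r.standard_normal((dim, dim)))
    shift = r.uniform(-4, 4, dim)
    return lambda x: rastrigin((x - shift) @ Q)

# Summary statistics of a random sample within the box [-5, 5]^dim.
X = rng.uniform(-5, 5, size=(1000, dim))
for seed in range(5):
    f = make_instance(seed)
    y = f(X)
    print(f"instance {seed}: y-skewness={skew(y):.2f}, "
          f"y-kurtosis={kurtosis(y):.2f}, best sampled value={y.min():.1f}")
```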
Anomaly detection describes methods for finding abnormal states, instances, or data points that differ from the space of normal values. Industrial processes are one domain in which models are expected to find anomalous data instances that are relevant to product quality. A major challenge, however, is the absence of labels in this setting. This paper contributes to a data-centric way of applying artificial intelligence in industrial production. Using a use case from the additive manufacturing of automotive components, we propose a deep-learning-based image processing pipeline. We integrate the concepts of domain randomization and synthetic data into the loop, demonstrating how advances in deep learning can be bridged into real-world industrial production processes.
IceCube is a cubic-kilometer array of optical sensors for detecting atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events recorded by the in-ice detector play a central role in IceCube data analysis. Reconstructing and classifying events is challenging because of the detector geometry, the inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively small number of signal photons produced per event. To address this challenge, IceCube events can be represented as point-cloud graphs, with graph neural networks (GNNs) serving as the classification and reconstruction method. GNNs are able to distinguish neutrino events from cosmic-ray backgrounds, classify different neutrino event types, and reconstruct the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the current state-of-the-art maximum-likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN improves the signal efficiency by 18% at a fixed false positive rate (FPR) compared to the current IceCube method. Alternatively, at a fixed signal efficiency, the GNN reduces the FPR by more than a factor of 8 (to below half a percent). For the reconstruction of energy, direction, and interaction vertex, the resolution improves by 13%-20% on average compared to the current maximum-likelihood techniques. When run on a GPU, the GNN is able to process IceCube events at a rate close to the median IceCube trigger rate of 2.7 kHz, which opens up the possibility of using low-energy neutrinos in online searches for transient events.
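The point-cloud-as-graph idea can be sketched in a few lines of numpy (this is not the IceCube model; the hit features, graph construction, and weights below are made up): sensor hits become graph nodes, a k-nearest-neighbour graph connects them, one mean-aggregation message-passing step produces node embeddings, and global pooling plus a logistic readout yields, for example, a signal-versus-background score.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "event": each detected pulse is a node with features (x, y, z, time, charge).
n_hits, n_feat, k = 40, 5, 6
hits = rng.normal(size=(n_hits, n_feat))

# Build a k-nearest-neighbour graph in (x, y, z) space.
d = np.linalg.norm(hits[:, None, :3] - hits[None, :, :3], axis=-1)
neighbours = np.argsort(d, axis=1)[:, 1:k + 1]

# One message-passing layer: aggregate neighbour features by the mean,
# concatenate with the node's own features, apply a linear map and ReLU.
W = rng.normal(scale=0.1, size=(2 * n_feat, 16))
agg = hits[neighbours].mean(axis=1)
h = np.maximum(np.concatenate([hits, agg], axis=1) @ W, 0.0)

# Global mean pooling to an event embedding, then a logistic readout
# (e.g., neutrino signal vs. cosmic-ray background).
w_out = rng.normal(scale=0.1, size=16)
event_embedding = h.mean(axis=0)
p_signal = 1.0 / (1.0 + np.exp(-event_embedding @ w_out))
print(f"P(signal) = {p_signal:.3f}")
```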
Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and inter-rater variability. Automated rating may benefit biomedical research as well as clinical assessment, but the diagnostic reliability of existing algorithms is unclear. Here, we present the Vascular Lesions Detection and Segmentation ("Where is VALDO?") challenge, which was run as a satellite event of the international conference on Medical Image Computing and Computer Aided Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for the automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge and proposed solutions for one or more tasks (4 for Task 1 - EPVS, 9 for Task 2 - microbleeds, and 6 for Task 3 - lacunes). Multi-cohort data were used for both training and evaluation. The results showed large variability in performance both across teams and across tasks, with particularly promising results for Task 1 - EPVS and Task 2 - microbleeds and no practically usable results yet for Task 3 - lacunes. They also highlighted performance inconsistencies across cases that may preclude use at the individual level, while still proving useful at the population level.
Explainable artificial intelligence (XAI) is increasingly used to analyze the behavior of neural networks. Concept activation uses human-interpretable concepts to explain neural network behavior. This study aims to evaluate the feasibility of regression concept activations for explaining detection and classification in multimodal volumetric data. A proof of concept was demonstrated in metastatic prostate cancer patients imaged with positron emission tomography/computed tomography (PET/CT). Multimodal volumetric concept activations were used to provide global and local explanations. Sensitivity was 80% at 1.78 false positives per patient. The global explanations showed that detection focused on the anatomical location in the CT and on detection confidence in the PET. The local explanations showed promise in helping to distinguish true positives from false positives. This study therefore demonstrated the feasibility of using regression concept activations to explain the detection and classification of multimodal volumetric data.
Human decision-making is plagued by many systematic errors. Many of these errors can be avoided by providing decision aids that guide decision-makers to attend to the important information and to integrate it according to a rational decision strategy. Designing such decision aids used to be a tedious manual process. Advances in cognitive science might make it possible to automate this process in the future. We recently introduced machine learning methods for automatically discovering optimal strategies for human decision-making and automatically explaining those strategies to people. Decision aids constructed through this approach were able to improve human decision-making. However, following the descriptions it generates is very tedious. We hypothesized that this problem can be overcome by conveying the automatically discovered decision strategies as a series of natural-language instructions for a procedure. Experiment 1 showed that people do indeed understand such procedural instructions more easily than the descriptions generated by our previous method. Encouraged by this finding, we developed an algorithm for translating the output of our previous method into procedural instructions. We applied the improved method to automatically generate decision aids for a naturalistic planning task (i.e., planning a trip) and a naturalistic decision task (i.e., choosing a mortgage). Experiment 2 showed that these automatically generated decision aids significantly improved people's performance in planning a road trip and in choosing a mortgage. These findings suggest that AI-powered boosting may have the potential to improve human decision-making in the real world.
Radiomics uses quantitative medical imaging features to predict clinical outcomes. Currently, for a new clinical application, the optimal radiomics method has to be found manually among the various available options through a heuristic trial-and-error process. In this study, we propose a framework to automatically optimize the construction of radiomics workflows per application. To this end, we formulate radiomics as a modular workflow and include a large number of common algorithms for each component. To optimize the workflow per application, we use automated machine learning, combining random search and ensembling. We evaluate our method in twelve different clinical applications, resulting in the following areas under the curve: 1) liposarcoma (0.83); 2) desmoid-type fibromatosis (0.82); 3) primary liver tumors (0.80); 4) gastrointestinal tumors (0.77); 5) colorectal liver metastases (0.61); 6) melanoma metastases (0.45); 7) hepatocellular carcinoma (0.75); 8) mesenteric fibrosis (0.80); 9) prostate cancer (0.72); 10) glioma (0.71); 11) Alzheimer's disease (0.87); and 12) head and neck cancer (0.84). We show that our framework performs competitively with human experts, outperforms a radiomics baseline, and performs similarly to or better than Bayesian optimization and more advanced ensemble approaches. Finally, our method fully automatically optimizes the construction of radiomics workflows, thereby streamlining the search for radiomics biomarkers in new applications. To facilitate reproducibility and future research, we publicly release six datasets, the software implementation of our framework, and the code to reproduce this study.
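The random-search idea behind such a framework can be sketched with scikit-learn on synthetic data (a simplified illustration under assumed components, not the released software; real radiomics workflows contain many more steps such as image preprocessing and feature extraction): each draw randomly fills the slots of a modular pipeline, and the configuration with the best cross-validated AUC is kept.

```python
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

random.seed(0)
# Synthetic stand-in for a table of radiomics features and a binary outcome.
X, y = make_classification(n_samples=120, n_features=50, n_informative=8,
                           random_state=0)

def sample_workflow():
    # Randomly fill each slot of the modular workflow.
    scaler = random.choice([StandardScaler(), MinMaxScaler()])
    selector = SelectKBest(f_classif, k=random.choice([5, 10, 20]))
    clf = random.choice([LogisticRegression(max_iter=1000),
                         RandomForestClassifier(n_estimators=100),
                         SVC(C=random.choice([0.1, 1.0, 10.0]))])
    return Pipeline([("scale", scaler), ("select", selector), ("clf", clf)])

best_auc, best_wf = -np.inf, None
for _ in range(25):  # random-search budget
    wf = sample_workflow()
    auc = cross_val_score(wf, X, y, cv=5, scoring="roc_auc").mean()
    if auc > best_auc:
        best_auc, best_wf = auc, wf

print(f"best cross-validated AUC: {best_auc:.2f}")
print(best_wf)
```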
The analysis of network structure is essential to many scientific areas, ranging from biology to sociology. As the computational task of clustering these networks into partitions, i.e., solving the community detection problem, is generally NP-hard, heuristic solutions are indispensable. The exploration of expedient heuristics has led to the development of particularly promising approaches in the emerging technology of quantum computing. Motivated by the substantial hardware demands of all established quantum community detection approaches, we introduce a novel QUBO-based approach that needs only as many qubits as the graph has nodes and is represented by a QUBO matrix as sparse as the input graph's adjacency matrix. This substantial improvement in the sparsity of the QUBO matrix, which is typically very dense in related work, is achieved through the novel concept of separation nodes. Instead of assigning every node to a community directly, this approach relies on the identification of a separation-node set which, upon its removal from the graph, yields a set of connected components representing the core components of the communities. Employing a greedy heuristic to assign the nodes from the separation-node set to the identified community cores, subsequent experimental results yield a proof of concept. This work hence presents a promising approach to NISQ-ready quantum community detection, catalyzing the application of quantum computers to the network structure analysis of large-scale, real-world problem instances.
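The classical half of this pipeline can be sketched as follows (the separation-node set below is hypothetical and hard-coded; in the proposed approach it would be obtained by solving the sparse, adjacency-shaped QUBO): removing the separation nodes leaves connected components that act as community cores, and each separation node is then greedily assigned to the core with which it shares the most edges.

```python
import networkx as nx

G = nx.karate_club_graph()  # small stand-in for a real-world network

# Hypothetical separation-node set; in the proposed approach this set would
# come from solving the QUBO, e.g., on a quantum annealer.
separation_nodes = {0, 2, 8, 13, 19, 31, 33}

# Removing the separation nodes leaves connected components: the community cores.
core_graph = G.copy()
core_graph.remove_nodes_from(separation_nodes)
cores = [set(c) for c in nx.connected_components(core_graph)]

# Greedy post-processing: assign each separation node to the core with which
# it shares the most edges.
communities = [set(c) for c in cores]
for v in separation_nodes:
    overlaps = [sum(1 for u in G.neighbors(v) if u in core) for core in cores]
    communities[overlaps.index(max(overlaps))].add(v)

for i, com in enumerate(communities):
    print(f"community {i}: {sorted(com)}")
```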
In recent years, several learning approaches to point goal navigation in previously unseen environments have been proposed. They vary in the representations of the environments, problem decomposition, and experimental evaluation. In this work, we compare state-of-the-art Deep Reinforcement Learning based approaches with a Partially Observable Markov Decision Process (POMDP) formulation of the point goal navigation problem. We adapt the POMDP sub-goal framework proposed by [1] and modify the component that estimates frontier properties by using partial semantic maps of indoor scenes built from the semantic segmentation of images. In addition to the well-known completeness of the model-based approach, we demonstrate that it is robust and efficient in that it leverages informative, learned properties of the frontiers compared to an optimistic frontier-based planner. We also demonstrate its data efficiency compared to end-to-end deep reinforcement learning approaches. We compare our results against an optimistic planner, ANS, and DD-PPO on the Matterport3D dataset using the Habitat Simulator. We show comparable, though slightly worse, performance than the SOTA DD-PPO approach, yet with far less data.
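The frontier notion underlying such sub-goal frameworks can be illustrated with a small grid-world sketch (not the cited framework's code; the map, goal, and properties below are made up): in a partial occupancy grid, frontier cells are free cells bordering unknown space, and simple per-frontier properties such as size and distance to the goal are the kind of quantities a learned component could estimate from partial semantic maps.

```python
import numpy as np
from scipy import ndimage

# Partial occupancy grid: 0 = free, 1 = occupied, -1 = unknown.
grid = -np.ones((20, 20), dtype=int)
grid[5:15, 5:15] = 0          # explored free space
grid[10, 5:12] = 1            # a wall observed so far
goal = np.array([2.0, 18.0])  # point goal in grid coordinates

# Frontier cells: free cells with at least one unknown 4-neighbour.
free = grid == 0
unknown = grid == -1
unknown_nbr = np.zeros_like(free)
for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
    unknown_nbr |= np.roll(unknown, shift, axis=axis)
frontier = free & unknown_nbr

# Group frontier cells into connected frontiers and compute simple properties.
labels, n = ndimage.label(frontier)
for i in range(1, n + 1):
    cells = np.argwhere(labels == i)
    centroid = cells.mean(axis=0)
    print(f"frontier {i}: size={len(cells)}, "
          f"distance to goal={np.linalg.norm(centroid - goal):.1f}")
```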